
    POSEYDON - Converting the DAΦNE collider into a double positron facility: a high duty-cycle pulse stretcher and a storage ring

    This project proposes to reuse the DAΦNE accelerator complex to produce a high-intensity (up to 10^10 positrons per pulse), high-quality beam of high-energy (up to 500 MeV) positrons for HEP experiments, mainly, but not only, motivated by searches for light dark particles. Such a facility would provide a unique source of ultra-relativistic, narrow-band and low-emittance positrons with a high duty factor, without employing cold technology, and would be ideal for exploring the existence of light dark matter particles, produced in positron-on-target annihilations into a photon plus missing mass, using the bump-hunt technique. The PADME experiment, which uses the beam extracted from the DAΦNE BTF, is indeed limited by the low duty factor (10^-5 = 200 ns/20 ms). The idea is to use a variant of third-integer resonant extraction, with the aim of obtaining an emittance below 10^-6 m⋅rad while tailoring the scheme to the peculiar optics of the DAΦNE machine. Alternatively, the possibility of kicking the positrons by means of channelling effects in crystals can be evaluated. This would not only increase the extraction efficiency but also improve the beam quality, thanks to the high collimation of channelled particles. This is challenging for sub-GeV leptons; in particular, this would be the first positron beam obtained with crystal-assisted extraction (so far generally limited to protons and ions). The availability of an intense extracted positron beam with a tuneable pulse length would also enable other applications, ranging from radiation production with crystal undulators to irradiation for the aerospace industry. The second ring can be used to store positrons accelerated by the LINAC, both for producing synchrotron radiation (reversing the polarity of the ring currently used for electrons) and for machine studies with positively charged particles, such as instabilities driven by the electron-cloud effect.
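The duty-factor figure quoted above follows from simple arithmetic on the current BTF extraction scheme (200 ns pulses delivered every 20 ms). A minimal check:

```python
# Duty factor of the current DAFNE BTF extraction quoted in the abstract:
# 200 ns pulses delivered every 20 ms (i.e. a 50 Hz repetition rate).
pulse_length_s = 200e-9   # extracted pulse length [s]
repetition_s = 20e-3      # time between pulses [s]

duty_factor = pulse_length_s / repetition_s
print(f"duty factor = {duty_factor:.0e}")  # 1e-05, matching the text
```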

    Ideas for extending the Frascati LINAC positron beam pulses for the resonant search of a X(17 MeV) boson

    The results on the so-called 8Be anomaly, recently corroborated by similar experimental evidence in the radiative transitions of excited 4He nuclei, could be explained by the creation of a new particle with a mass of mX ≃ 16.7 MeV/c². The PADME experiment, designed to search for light dark-sector particles, such as a dark photon or an axion-like particle, both in γ + missing energy and e+e− final states, has the potential to perform a completely independent search, also exploiting the cross-section enhancement at the resonance √s ≃ mX. In the case of the X(17 MeV) boson, this corresponds to a positron energy of 282 MeV when annihilating on electrons at rest. In order to keep the pile-up and over-veto probabilities under control, the positron beam hitting the PADME active target should be diluted in time as much as possible. PADME has already collected a first data set at the Frascati Beam-Test Facility, using the positron beam accelerated by the DAΦNE LINAC, with a maximum pulse length of ∼200 ns and energy of 490–550 MeV. In this note, possible modifications to the RF system of the LINAC, aimed at further extending the pulse length at the expense of the maximum beam energy, are briefly discussed.
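The 282 MeV figure follows from standard resonant-annihilation kinematics on electrons at rest, where √s = mX fixes the beam energy. A quick sketch of the calculation (mass values are the usual textbook numbers, not taken from the note itself):

```python
# Resonance condition for e+ e- -> X with the electron at rest:
#   s = 2*m_e**2 + 2*m_e*E_beam,  and  sqrt(s) = m_X
# so the resonant positron energy is E = (m_X**2 - 2*m_e**2) / (2*m_e).
m_e = 0.511   # electron mass [MeV/c^2]
m_X = 17.0    # hypothetical X boson mass [MeV/c^2]

E_res = (m_X**2 - 2 * m_e**2) / (2 * m_e)
print(f"resonant positron energy ~ {E_res:.1f} MeV")  # ~282 MeV, as in the text
```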

    Improving application responsiveness with the BFQ disk I/O scheduler

    BFQ (Budget Fair Queueing) is a production-quality, proportional-share disk scheduler with a relatively large user base. Part of its success is due to a set of simple heuristics that we added to the original algorithm about one year ago. These heuristics are the main focus of this paper. The first heuristic enriches BFQ with one of the most desirable properties for a desktop or handheld system: responsiveness. The remaining heuristics improve the robustness of BFQ across heterogeneous devices, and help BFQ preserve a high throughput under demanding workloads. To measure the performance of these heuristics, we have implemented a suite of micro- and macro-benchmarks mimicking several real-world tasks, and have run it on three different systems, each with a single rotational disk. We have also compared our results against Completely Fair Queueing (CFQ), the default Linux disk scheduler.
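The core idea behind proportional-share budget scheduling can be illustrated with a toy model: each process queue is served exclusively until it exhausts a budget proportional to its weight. This is a deliberately simplified sketch, closer to deficit round-robin than to BFQ's actual machinery; all names and the budget unit are illustrative.

```python
from collections import deque

# Toy model (NOT BFQ itself) of weight-proportional budgets: each queue is
# served exclusively until its per-round budget, proportional to its weight,
# is exhausted, then the scheduler moves on to the next queue.
def serve(queues, weights, base_budget=8):
    """queues: name -> deque of request sizes (in sectors); returns service log."""
    log = []
    while any(queues.values()):
        for name, q in queues.items():
            budget = base_budget * weights[name]   # sectors granted this round
            while q and budget > 0:
                size = q.popleft()
                budget -= size
                log.append((name, size))
    return log

qs = {"A": deque([4, 4, 4, 4]), "B": deque([4, 4])}
log = serve(qs, {"A": 2, "B": 1})
# "A" (weight 2) drains its four requests in one round; "B" follows.
```

With sequential I/O, serving one queue at a time in budget-sized chunks is also what preserves throughput on rotational disks, since it avoids ping-ponging the disk head between processes.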

    On service guarantees of fair-queueing schedulers in real systems

    In most systems, fair-queueing packet schedulers are the algorithms of choice for providing bandwidth and delay guarantees. These guarantees are computed assuming that the scheduler is directly attached to the transmit unit with no interposed buffering, and, for timestamp-based schedulers, that the exact number of bits transmitted is known when timestamps need to be updated. Unfortunately, both assumptions are unrealistic. In particular, real communication devices normally include FIFO queues (possibly very deep ones) between the scheduler and the transmit unit, and the presence of these queues invalidates the proofs of the service guarantees of existing timestamp-based fair-queueing schedulers. In this paper we address these issues with the following two contributions. First, we show how to modify timestamp-based, worst-case optimal and quasi-optimal fair-queueing schedulers so as to cope with the presence of FIFO queues and with uncertainty about the number of bits transmitted. Second, we provide analytical bounds on the actual guarantees provided, in these real-world conditions, both by the modified timestamp-based fair-queueing schedulers and by basic round-robin schedulers. These results should help designers make informed decisions and sound tradeoffs when building systems.
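For readers unfamiliar with timestamp-based fair queueing, the following minimal sketch shows the classic finish-tag idea that such schedulers build on (this illustrates the WFQ-style baseline discussed above, not the modified algorithms of the paper; the simplification of all packets being backlogged at time zero is an assumption of the sketch):

```python
import heapq

# Minimal sketch of timestamp-based fair queueing: each flow i with weight w_i
# assigns a virtual finish tag F = F_prev + L/w_i to each packet (here all
# packets are assumed backlogged at t=0, so the system virtual time is 0),
# and packets are transmitted in increasing finish-tag order.
def fair_queue(packets, weights):
    """packets: list of (flow, length) in arrival order; returns service order."""
    finish = {f: 0.0 for f in weights}
    heap = []
    for seq, (flow, length) in enumerate(packets):
        finish[flow] += length / weights[flow]   # per-flow finish timestamp
        heapq.heappush(heap, (finish[flow], seq, flow, length))
    return [(flow, length) for _, _, flow, length in sorted(heap)]

order = fair_queue([("a", 100), ("a", 100), ("b", 100)], {"a": 1, "b": 1})
# Flow "b" is served between the two "a" packets: equal weights get
# interleaved service rather than FIFO order.
```

The paper's point is precisely that the `finish` update above silently assumes the scheduler knows how many bits have left the device, which a deep FIFO between scheduler and transmit unit makes untrue.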

    Evolution of the BFQ Storage-I/O scheduler

    An accurate storage-I/O scheduler, named Budget Fair Queueing (BFQ), was extended with a special set of heuristics a few years ago. The resulting, improved scheduler, codenamed BFQ-v1, was able to guarantee a number of desirable service properties, including high responsiveness, to applications and system services. In the intervening years, BFQ-v1 has become relatively popular on desktop and handheld systems, and has evolved further. However, no official, comprehensive and consolidated documentation of the successive improvements has been provided. In this paper we fill this documentation gap by describing the current version of BFQ (v7r8). We also show the performance of BFQ-v7r8 through experimental results, in terms of throughput and application responsiveness, on both an HDD and an SSD.

    Linear Accelerator Test Facility at LNF: Conceptual Design Report

    Test beam and irradiation facilities are the key enabling infrastructures for research in high-energy physics (HEP) and astro-particle physics. In the last 11 years the Beam-Test Facility (BTF) of the DAΦNE accelerator complex in the Frascati laboratory has gained an important role among the European infrastructures devoted to the development and testing of particle detectors. At the same time, the BTF operation has been largely shadowed, in terms of resources, by the running of the DAΦNE electron-positron collider. The present proposal aims at improving the performance of the facility from two different points of view:
    • Extending the range of applications of the LINAC beam extracted to the BTF lines, in particular in the (in some sense opposite) directions of hosting fundamental physics experiments and providing electron irradiation for industrial users;
    • Extending the life of the LINAC beyond, or independently of, its use as injector of the DAΦNE collider, as it is also a key element of the electron/positron beam facility.
    The main lines of these two developments can be identified as:
    • Consolidation of the LINAC infrastructure, in order to guarantee stable operation in the longer term;
    • Upgrade of the LINAC energy, in order to increase the facility's capability (especially for the almost unique extracted positron beam);
    • Doubling of the BTF beam-lines, in order to cope with the significant increase of users due to the much wider range of applications.
    Even though this project stems from a facility that has been existing and operational for more than a decade, based on an accelerator complex designed more than 20 years ago, it is probably useful to consider the resulting infrastructure as a new facility rather than an improvement of the existing DAΦNE BTF: BTF2 or TALYA (Linear Accelerator Test fAcilitY) are the suggested names (Fig. 0.1).

    A memory-centric approach to enable timing-predictability within embedded many-core accelerators

    There is an increasing interest among real-time systems architects in multi- and many-core accelerated platforms. The main obstacle to the adoption of such devices in industrial settings is the difficulty of tightly estimating the multiple interferences that may arise among the parallel components of the system. This concerns, in particular, concurrent accesses to shared memory and communication resources. Existing worst-case execution time analyses are extremely pessimistic, especially when adopted for systems composed of hundreds to thousands of cores. This significantly limits the potential for the adoption of these platforms in real-time systems. In this paper, we study how the predictable execution model (PREM), a memory-aware approach to enabling timing predictability in real-time systems, can be successfully adopted on multi- and many-core heterogeneous platforms. Using a state-of-the-art multi-core platform as a testbed, we validate that it is possible to obtain an order-of-magnitude improvement in the WCET bounds of parallel applications if data movements are adequately orchestrated in accordance with PREM. We identify which system parameters most affect the tremendous performance opportunities offered by this approach, both on average and in the worst case, taking a first step towards predictable many-core systems.
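The phase separation at the heart of PREM can be conveyed with a small conceptual model: each task is split into a memory phase, which prefetches its entire working set into fast local storage, and a compute phase that touches only that local copy. The sketch below is purely illustrative (the function names and the list-as-DRAM model are assumptions, not the paper's implementation):

```python
# Conceptual sketch of PREM-style phase separation. The memory phase is the
# only part that accesses shared DRAM, so a system-level scheduler can
# serialize memory phases of different cores to avoid contention, while
# compute phases run contention-free from local storage.
def memory_phase(shared_dram, lo, hi):
    # Models a DMA/prefetch of the working set into a core-local scratchpad.
    return shared_dram[lo:hi]

def compute_phase(scratch):
    # Touches only the local copy: no shared-memory accesses here.
    return sum(x * x for x in scratch)

dram = list(range(100))
local = memory_phase(dram, 10, 20)   # contended, but schedulable in isolation
result = compute_phase(local)        # contention-free by construction
```

The WCET benefit claimed in the paper comes from exactly this structure: once memory phases are orchestrated so that they do not overlap, the compute phases no longer need pessimistic interference terms in their bounds.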

    Using a lag-balance property to tighten tardiness bounds for global EDF

    Several tardiness bounds for global EDF and global-EDF-like schedulers have been proposed over the last decade. These bounds contain a component that is explicitly or implicitly proportional to how much the system may cumulatively lag behind an ideal schedule in serving tasks. This cumulative lag is in turn upper-bounded by bounding each per-task component in isolation, and then summing the individual per-task bounds. Unfortunately, this approach leads to an over-pessimistic cumulative upper bound, because it does not take into account a lag-balance property enjoyed by any work-conserving scheduling algorithm. In this paper we show how to obtain a new tardiness bound for global EDF by integrating this property with the approach used to prove the first tardiness bounds proposed in the literature. In particular, we compute a new tardiness bound for implicit-deadline tasks scheduled by preemptive global EDF on a symmetric multiprocessor. According to our experiments, as the number of processors increases, this new tardiness bound becomes progressively tighter than the tightest bound available in the literature, with a maximum tightness improvement of 29%. A negative characteristic of this new bound is that computing its value takes exponential time with a brute-force algorithm (no faster exact or approximate algorithm is available yet). As a more general result, the property highlighted in this paper might help to improve the analysis for other scheduling algorithms, possibly on different systems and with other types of task sets. In this respect, our experimental results also point out the following negative fact: existing tardiness bounds for global EDF, including the new bound we propose, may become remarkably loose if every task has a low utilization (ratio between the execution time and the minimum inter-arrival time of the jobs of the task), or if the sum of the utilizations of the tasks is lower than the total capacity of the system.
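The lag concept underlying the analysis is easy to make concrete: the lag of a task at time t is the service it would have received from an ideal fluid schedule minus the service it actually received. The lag-balance intuition is that, while the system is fully busy, positive and negative per-task lags offset each other rather than all peaking at once. A minimal numerical illustration (the utilizations and service values are made-up inputs, not data from the paper):

```python
# lag_i(t) = u_i * t - service_i(t): ideal fluid service minus actual service.
def lags(utilizations, service, t):
    return [u * t - s for u, s in zip(utilizations, service)]

u = [0.6, 0.5, 0.4]        # per-task utilizations (sum = 1.5 <= m = 2 CPUs)
served = [5.0, 6.0, 4.0]   # actual service each task received by time t
t = 10.0

per_task = lags(u, served, t)   # [1.0, -1.0, 0.0]
total_lag = sum(per_task)       # 0.0: the per-task lags cancel out
```

Summing worst-case per-task lags as if they could all be maximal simultaneously ignores exactly this cancellation, which is why the per-task-sum approach criticized above is pessimistic.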

    Direct vs 2-stage approaches to structured motif finding

    BACKGROUND: The notion of DNA motif is a mathematical abstraction used to model regions of the DNA (known as Transcription Factor Binding Sites, or TFBSs) that are bound by a given Transcription Factor to regulate gene expression or repression. In turn, DNA structured motifs are a mathematical counterpart that models sets of TFBSs that work in concert in the gene regulation processes of higher eukaryotic organisms. Typically, a structured motif is composed of an ordered set of isolated (or simple) motifs, separated by a variable, but somewhat constrained, number of "irrelevant" base pairs. Discovering structured motifs in a set of DNA sequences is a computationally hard problem that has been addressed by a number of authors using either a direct approach, or via the preliminary identification and successive combination of simple motifs. RESULTS: We describe a computational tool, named SISMA, for the de-novo discovery of structured motifs in a set of DNA sequences. SISMA is an exact, enumerative algorithm, meaning that it finds all the motifs conforming to the specifications. It does so in two stages: first it discovers all the possible component simple motifs, then it combines them in a way that respects the given constraints. We developed SISMA mainly with the aim of understanding the potential benefits of such a 2-stage approach w.r.t. direct methods. In fact, no 2-stage software was available for the general problem of structured motif discovery, but only a few tools that solved restricted versions of the problem. We evaluated SISMA against other published tools on a comprehensive benchmark made of both synthetic and real biological datasets. In a significant number of cases, SISMA outperformed the competitors, exhibiting good performance also in most of the cases in which it was inferior. CONCLUSIONS: A reflection on the results obtained led us to conclude that a 2-stage approach can be implemented with many advantages over direct approaches. Some of these have to do with greater modularity, ease of parallelization, and the possibility of performing adaptive searches of structured motifs. As another consideration, we noted that most hard instances for SISMA were easy to detect in advance. In these cases one may initially opt for a direct method; or, as a viable alternative in most laboratories, one could run both direct and 2-stage tools in parallel, halting the computations when the first halts.
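The 2-stage strategy described above can be illustrated with a deliberately simplified toy (exact k-mers only, two-box motifs, no mismatches; this is an assumption-laden sketch, not SISMA's algorithm): stage 1 enumerates the simple motifs occurring in every sequence, and stage 2 pairs them into structured motifs whose gap lies within a given range.

```python
# Toy 2-stage structured-motif finder: stage 1 collects exact k-mers common
# to all sequences; stage 2 keeps ordered pairs (a, b) such that, in every
# sequence, some occurrence of a is followed by b after a gap in [gmin, gmax].
def simple_motifs(seqs, k):
    common = None
    for s in seqs:
        kmers = {s[i:i + k] for i in range(len(s) - k + 1)}
        common = kmers if common is None else common & kmers
    return common or set()

def occurs_with_gap(s, a, b, k, gmin, gmax):
    for i in range(len(s) - k + 1):
        if s[i:i + k] == a:
            for g in range(gmin, gmax + 1):
                j = i + k + g
                if s[j:j + k] == b:
                    return True
    return False

def structured_motifs(seqs, k, gmin, gmax):
    simple = simple_motifs(seqs, k)   # stage 1: simple-motif discovery
    return {(a, b)                    # stage 2: constrained combination
            for a in simple for b in simple
            if all(occurs_with_gap(s, a, b, k, gmin, gmax) for s in seqs)}

found = structured_motifs(["AACGTTAC", "TACGGTAC"], k=3, gmin=1, gmax=2)
# Both sequences contain "ACG" followed, one base later, by "TAC".
```

Even this toy shows where the advantages mentioned in the conclusions come from: stage 1 and stage 2 are independent modules, the pair-checking loop parallelizes trivially over candidate pairs, and stage 2 can be re-run with different gap constraints without repeating stage 1.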